YouTube videos: Smollm 135M
Fine-tuned Hugging Face Smollm-135M on HF ultrafeedback does inference on 8 CPUs
Testing TINY and FAST local LLM SmolLM2 135 Million parameter model
SmolLM 135M tutorial — a tiny and fast AI model — local installation
SmolLM: The Rise of Smol Models
This New LLM Model Fell Short of Expectations | SmollM
SmolLM is Now Live on OpenxAI!
Failed to convert SmolLM 135M model to tflite — error (GitHub)
Run Ollama on GT 1030 (Linux Mint + Old PC)
Quantized fine-tuned Smollm-135 FP16 model does inference on 8 CPUs
Run Smollm 135M LLM Locally in Minutes! (No GPU Needed)
SmolLM3 Looked Promising... Until We Tested It
⚡ Open Model Pretraining Masterclass — Elie Bakouch, HuggingFace SmolLM 3, FineWeb, FinePDF
Fine Tuning a model
Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model
Tiny LLMs: Running an LLM on an RPI 3A+ with Ollama and a bit of RAG